Strong minimax lower bounds for learning

Abstract

Minimax lower bounds for concept learning state, for example, that for each sample size n and learning rule gn, there exists a distribution of the observation X and a concept C to be learnt such that the expected error of gn is at least a constant times V/n, where V is the VC dimension of the concept class. However, these bounds do not tell anything about the rate of decrease of the error for a fixed distribution-concept pair. In this paper we investigate minimax lower bounds in such a stronger sense. We show that for several natural k-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any sequence of learning rules {gn}, there exists a fixed distribution of X and a fixed concept C such that the expected error is larger than a constant times k/n for infinitely many n. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.

A. Antos is with the Department of Mathematics and Computer Science, Faculty of Electrical Engineering, Technical University of Budapest, 1521 Stoczek u.2, Budapest, Hungary (email: [email protected]). G. Lugosi is with the Department of Economics, Pompeu Fabra University, Ramon Trias Fargas, 25-27, 08005 Barcelona, Spain (email: [email protected]).
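The contrast between the classical bound and the strong bound of the abstract can be sketched in symbols. This is our shorthand only: the error functional L, the symbol μ for the distribution of X, and the unspecified constants c, c' are notation we introduce, not taken from the paper.

```latex
% Classical minimax lower bound: the hard (distribution, concept)
% pair is allowed to change with the sample size n.
\forall n,\ \forall g_n:\qquad
\sup_{(\mu,\,C)} \mathbb{E}\,L(g_n) \;\ge\; c\,\frac{V}{n}

% Strong minimax lower bound (this paper): one fixed pair is hard
% for infinitely many sample sizes simultaneously.
\forall \{g_n\}_{n\ge 1},\ \exists (\mu,\,C):\qquad
\mathbb{E}\,L(g_n) \;\ge\; c'\,\frac{k}{n}
\quad\text{for infinitely many } n
```

The quantifier order is the whole point: in the classical statement the adversary may pick a new hard pair for every n, while the strong statement exhibits a single fixed pair on which the error cannot decrease faster than k/n along every sequence of rules, except on a sparse set of sample sizes.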


Similar references


On Bayes Risk Lower Bounds

This paper provides a general technique for lower bounding the Bayes risk of statistical estimation, applicable to arbitrary loss functions and arbitrary prior distributions. A lower bound on the Bayes risk not only serves as a lower bound on the minimax risk, but also characterizes the fundamental limit of any estimator given the prior knowledge. Our bounds are based on the notion of f-inform...


Minimax Lower Bounds for Dictionary Learning from Tensor Data

This paper provides lower bounds on the sample complexity of estimating Kronecker-structured dictionaries for Kth-order tensor data. The results suggest the sample complexity of dictionary learning for tensor data can be significantly lower than that for unstructured data.


Optimal Non-Asymptotic Lower Bound on the Minimax Regret of Learning with Expert Advice

We prove non-asymptotic lower bounds on the expectation of the maximum of d independent Gaussian variables and the expectation of the maximum of d independent symmetric random walks. Both lower bounds recover the optimal leading constant in the limit. A simple application of the lower bound for random walks is an (asymptotically optimal) non-asymptotic lower bound on the minimax regret of onlin...


Minimax Lower Bounds for Realizable Transductive Classification

Transductive learning considers a training set of m labeled samples and a test set of u unlabeled samples, with the goal of best labeling that particular test set. Conversely, inductive learning considers a training set of m labeled samples drawn iid from P (X,Y ), with the goal of best labeling any future samples drawn iid from P (X). This comparison suggests that transduction is a much easier...




Publication date: 1998